Stochastic sensor scheduling via distributed convex optimization
Similar resources
AdaDelay: Delay Adaptive Distributed Stochastic Convex Optimization
We study distributed stochastic convex optimization under the delayed gradient model, where the server nodes perform parameter updates while the worker nodes compute stochastic gradients. We discuss, analyze, and experiment with a setup motivated by the behavior of real-world distributed computation networks, where the machines are slow to different degrees at different times. Therefore, we ...
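The delayed gradient model lends itself to a short sketch. The Python fragment below is a minimal illustration under assumed names (`delayed_gradient_sgd`, `delays`), not AdaDelay itself: it applies each stochastic gradient at a stale iterate and shrinks the server's step size with both the step count and the observed delay, which is the behavior the abstract describes.

```python
import numpy as np

def delayed_gradient_sgd(grad_fn, x0, delays, c=0.1):
    """Server-side SGD under the delayed gradient model (a sketch).

    The gradient applied at step t was computed at the iterate from
    delays[t] steps ago; the step size shrinks with t plus that delay.
    """
    x = x0.copy()
    history = [x0.copy()]                    # past iterates seen by workers
    for t, d in enumerate(delays):
        tau = min(d, t)                      # effective delay of this gradient
        g = grad_fn(history[t - tau])        # stochastic gradient at a stale point
        x = x - (c / np.sqrt(t + tau + 1)) * g
        history.append(x.copy())
    return x
```

Making the step size depend on the delay actually observed, rather than on a fixed worst-case delay bound, is the adaptivity the title refers to.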
Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization
We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradi...
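Concretely, one round of such a scheme can be sketched as follows (hypothetical names; `W` is assumed to be a doubly stochastic mixing matrix and `project` maps onto the common constraint set):

```python
import numpy as np

def consensus_subgradient_step(xs, W, subgrads, alpha, project):
    """One synchronous round (a sketch): each agent averages its
    neighbors' iterates, steps along a subgradient of its own
    objective, and projects back onto the common constraint set."""
    n = len(xs)
    out = []
    for i in range(n):
        v = sum(W[i][j] * xs[j] for j in range(n))   # weighted consensus average
        out.append(project(v - alpha * subgrads[i](v)))
    return out

# Toy run: minimize sum_i ||x - c_i||^2 over the unit ball.
centers = [np.array([2.0, 0.0]), np.array([0.0, 2.0]), np.array([-2.0, 0.0])]
xs = [np.zeros(2) for _ in centers]
W = np.full((3, 3), 1.0 / 3.0)                        # doubly stochastic weights
grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
ball = lambda x: x / max(1.0, np.linalg.norm(x))      # projection onto unit ball
for k in range(200):
    xs = consensus_subgradient_step(xs, W, grads, 0.5 / (k + 1), ball)
```

The toy run uses a complete communication graph with uniform weights; in the distributed setting each agent would only average over its actual neighbors.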
Distributed Stochastic Optimization via Adaptive Stochastic Gradient Descent
Stochastic convex optimization algorithms are the most popular way to train machine learning models on large-scale data. Scaling up the training process of these models is crucial in many applications, but the most popular algorithm, Stochastic Gradient Descent (SGD), is a serial algorithm that is surprisingly hard to parallelize. In this paper, we propose an efficient distributed stochastic op...
Stochastic Convex Optimization
For supervised classification problems, it is well known that learnability is equivalent to uniform convergence of the empirical risks and thus to learnability by empirical minimization. Inspired by recent regret bounds for online convex optimization, we study stochastic convex optimization, and uncover a surprisingly different situation in the more general setting: although the stochastic conv...
Asynchronous stochastic convex optimization
We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures. Roughly, the noise inherent to the stochastic approximation scheme dominates any noise from...
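For intuition, a completely asynchronous (lock-free) stochastic gradient loop can be sketched as below; the names are illustrative assumptions, and the optimality claim comes from the paper's analysis, not from this fragment.

```python
import threading
import numpy as np

def async_sgd(grad_fn, x, num_workers=4, steps=1000, alpha=0.01):
    """Lock-free asynchronous SGD on a shared iterate (a sketch).

    Workers read and update x with no synchronization, so each
    gradient may be evaluated at a slightly stale copy of x.
    """
    def worker():
        for _ in range(steps):
            local = x.copy()                  # possibly stale read of shared state
            g = grad_fn(local)                # stochastic gradient at the stale copy
            np.subtract(x, alpha * g, out=x)  # in-place update; races are tolerated
    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x
```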
Journal
Journal title: Automatica
Year: 2015
ISSN: 0005-1098
DOI: 10.1016/j.automatica.2015.05.014